38 results for Lot sizing and scheduling

in QUB Research Portal - Research Directory and Institutional Repository for Queen's University Belfast


Relevance:

100.00%

Publisher:

Abstract:

Background and Aims: To compare endoscopy and pathology sizing in a large population-based series of colorectal adenomas and to evaluate the implications for patient stratification into surveillance colonoscopy. Methods: Endoscopy and pathology sizes available from intact adenomas removed at colonoscopies performed as part of the Northern Ireland Bowel Cancer Screening Programme, from 2010 to 2015, were included in this study. Chi-squared tests were applied to compare size categories in relation to clinicopathological parameters and colonoscopy surveillance strata according to current American Gastroenterology Association and British Society of Gastroenterology guidelines. Results: A total of 2521 adenomas from 1467 individuals were included. There was a trend toward larger endoscopy than pathology sizing in 4 of the 5 study centers, but overall sizing concordance was good. Significantly greater clustering with sizing to the nearest 5 mm was evident in endoscopy versus pathology sizing (30% vs 19%, p<0.001), which may result in lower accuracy. Applying a 10-mm cut-off relevant to guidelines on risk stratification, 7.3% of all adenomas and 28.3% of those 8 to 12 mm in size had discordant endoscopy and pathology size categorization. Depending upon which guidelines are applied, 4.8% to 9.1% of individuals had differing risk stratification for surveillance recommendations, with the use of pathology sizing resulting in marginally fewer recommended surveillance colonoscopies. Conclusions: Choice of pathology or endoscopy approaches to determine adenoma size will potentially influence surveillance colonoscopy follow-up in 4.8% to 9.1% of individuals. Pathology sizing appears more accurate than endoscopy sizing, and preferential use of pathology size would result in a small, but clinically important, decreased burden on surveillance colonoscopy demand. Careful endoscopy sizing is required for adenomas removed piecemeal.

Relevance:

100.00%

Publisher:

Abstract:

The use of the computational Grid processor network has become a common way for researchers and scientists without access to local processor clusters to avail of the benefits of parallel processing for compute-intensive applications. This demand requires effective and efficient dynamic allocation of the available resources. Although static scheduling and allocation techniques have proved effective, the dynamic nature of the Grid requires innovative techniques for reacting to change and maintaining stability for users. Dynamic scheduling requires quite powerful optimization techniques, which can themselves be too slow to react in time to produce an effective schedule; there is often a trade-off between solution quality and the speed with which a solution is obtained. This paper presents an extension of a technique used in optimization and scheduling which provides a means of achieving this balance and improves on similar published approaches.
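The quality/speed trade-off mentioned in this abstract can be illustrated with a simple anytime heuristic: a fast greedy pass produces a usable schedule immediately, and whatever reaction time remains is spent refining it. The sketch below is only an illustration of that idea under an assumed independent-task model; it is not the technique the paper presents, and the names (`schedule_anytime`, `makespan`) are hypothetical.

```cpp
#include <algorithm>
#include <chrono>
#include <random>
#include <vector>

// Toy model: assign independent tasks (with known runtimes) to Grid nodes
// (with known speeds), minimising the makespan.  A fast greedy pass gives an
// initial schedule; random re-assignments then refine it until the reaction
// deadline expires, so solution quality scales with the time available.
struct Schedule { std::vector<int> node_of_task; };

double makespan(const std::vector<double>& task_len,
                const std::vector<double>& node_speed,
                const Schedule& s) {
    std::vector<double> load(node_speed.size(), 0.0);
    for (size_t t = 0; t < task_len.size(); ++t)
        load[s.node_of_task[t]] += task_len[t] / node_speed[s.node_of_task[t]];
    return *std::max_element(load.begin(), load.end());
}

Schedule schedule_anytime(const std::vector<double>& task_len,
                          const std::vector<double>& node_speed,
                          std::chrono::milliseconds budget) {
    // Greedy seed: place each task on the node that finishes it earliest.
    Schedule s{std::vector<int>(task_len.size(), 0)};
    std::vector<double> load(node_speed.size(), 0.0);
    for (size_t t = 0; t < task_len.size(); ++t) {
        size_t best = 0;
        for (size_t n = 1; n < node_speed.size(); ++n)
            if (load[n] + task_len[t] / node_speed[n] <
                load[best] + task_len[t] / node_speed[best]) best = n;
        s.node_of_task[t] = static_cast<int>(best);
        load[best] += task_len[t] / node_speed[best];
    }
    // Anytime refinement: keep trying random moves until the budget is spent.
    std::mt19937 rng(42);
    auto deadline = std::chrono::steady_clock::now() + budget;
    double best_cost = makespan(task_len, node_speed, s);
    while (std::chrono::steady_clock::now() < deadline) {
        Schedule cand = s;
        cand.node_of_task[rng() % task_len.size()] =
            static_cast<int>(rng() % node_speed.size());
        double c = makespan(task_len, node_speed, cand);
        if (c < best_cost) { best_cost = c; s = cand; }
    }
    return s;
}
```

In a real Grid scheduler the refinement step would be guided by the optimization technique under study rather than by random moves, but the structure of the trade-off is the same.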

Relevance:

100.00%

Publisher:

Abstract:

A key issue in the design of next-generation Internet routers and switches will be the provision of traffic manager (TM) functionality in the datapaths of their high-speed switching fabrics. A new architecture that allows dynamic deployment of different TM functions is presented. By considering the processing requirements of operations such as policing and congestion handling, queuing, shaping and scheduling, a solution has been derived that is scalable and has a consistent programmable interface. Programmability is achieved using a function computation unit which determines the action (e.g. drop, queue, remark, forward) from the packet attribute information together with state held in a memory storage part. Results of a Xilinx Virtex-5 FPGA reference design are presented.
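As a rough software analogue of the programmable action logic described above, the sketch below models a function computation unit that decides drop/queue/remark/forward from packet attributes plus per-flow state held in a memory storage part. The flow fields, token-bucket policer and thresholds are assumptions made for illustration; they are not the FPGA design presented in the paper.

```cpp
#include <algorithm>
#include <cstdint>
#include <unordered_map>

// Software model of a traffic-manager "function computation unit": it decides
// an action for each packet from the packet's attributes plus per-flow state
// held in a small memory.
enum class Action { Forward, Queue, Remark, Drop };

struct PacketAttr { uint32_t flow_id; uint16_t length; uint8_t dscp; };

struct FlowState { double tokens = 0.0; uint32_t queue_depth = 0; };

class FunctionComputationUnit {
public:
    FunctionComputationUnit(double rate_bytes_per_tick, double burst,
                            uint32_t queue_limit)
        : rate_(rate_bytes_per_tick), burst_(burst), limit_(queue_limit) {}

    Action decide(const PacketAttr& p) {
        FlowState& st = memory_[p.flow_id];           // the memory storage part
        st.tokens = std::min(burst_, st.tokens + rate_);
        if (st.tokens >= p.length) {                  // in profile (policer)
            st.tokens -= p.length;
            // Queue depth would be updated by the enqueue/dequeue logic.
            return st.queue_depth < limit_ ? Action::Queue : Action::Drop;
        }
        // Out of profile: remark best-effort traffic, drop everything else.
        return p.dscp == 0 ? Action::Remark : Action::Drop;
    }

private:
    double rate_, burst_;
    uint32_t limit_;
    std::unordered_map<uint32_t, FlowState> memory_;
};
```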

Relevance:

100.00%

Publisher:

Abstract:

A scheduling method for implementing a generic linear QR array processor architecture is presented. This improves on previous work and considerably simplifies the derivation of schedules for a folded linear system, where detailed account has to be taken of processor cell latency. The architecture and scheduling derived provide the basis of a generator for the rapid design of System-on-a-Chip (SoC) cores for QR decomposition.
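To make the folding and latency considerations concrete, the sketch below derives a feasible schedule for the virtual cells of a small triangular QR (Givens rotation) array time-multiplexed onto a shorter linear array. The column-modulo fold, the dependency pattern and the list-scheduling rule are simplifying assumptions for illustration only; they are not the scheduling method or generator described in the paper.

```cpp
#include <algorithm>
#include <cstdio>
#include <map>
#include <utility>
#include <vector>

// Virtual cell (i,j) with j >= i performs one update that depends on cells
// (i-1,j) and (i,j-1); each virtual cell is mapped onto one of P physical
// processors of a folded linear array, and occupies it for 'latency' cycles.
struct Op { int i, j; int proc; int start; };

std::vector<Op> derive_schedule(int n, int P, int latency) {
    std::vector<Op> ops;
    std::map<std::pair<int, int>, int> finish;   // finish time per virtual cell
    std::vector<int> proc_free(P, 0);            // next free cycle per processor
    for (int i = 0; i < n; ++i) {
        for (int j = i; j < n; ++j) {
            int proc = j % P;                    // simple fold: column mod P
            int ready = 0;                       // data-dependency constraint
            if (i > 0) ready = std::max(ready, finish[{i - 1, j}]);
            if (j > i) ready = std::max(ready, finish[{i, j - 1}]);
            int start = std::max(ready, proc_free[proc]);
            finish[{i, j}] = start + latency;
            proc_free[proc] = start + latency;   // cell busy for 'latency' cycles
            ops.push_back({i, j, proc, start});
        }
    }
    return ops;
}

int main() {
    for (const Op& op : derive_schedule(/*n=*/4, /*P=*/2, /*latency=*/3))
        std::printf("cell(%d,%d) -> proc %d, start %d\n",
                    op.i, op.j, op.proc, op.start);
}
```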

Relevance:

100.00%

Publisher:

Abstract:

Task dataflow languages simplify the specification of parallel programs by dynamically detecting and enforcing dependencies between tasks. These languages are, however, often restricted to a single level of parallelism. This language design is reflected in the runtime system, where a master thread explicitly generates a task graph and worker threads execute ready tasks and wake up their dependents. Such an approach is incompatible with state-of-the-art schedulers such as the Cilk scheduler, which minimize the creation of idle tasks (work-first principle) and place all task creation and scheduling off the critical path. This paper proposes an extension to the Cilk scheduler that reconciles task dependencies with the work-first principle. We discuss the impact of task dependencies on the properties of the Cilk scheduler. Furthermore, we propose a low-overhead ticket-based technique for dependency tracking and enforcement at the object level. Our scheduler also supports renaming of objects in order to increase task-level parallelism. Renaming is implemented using versioned objects, a new type of hyperobject. Experimental evaluation shows that the unified scheduler is as efficient as the Cilk scheduler when tasks have no dependencies. Moreover, the unified scheduler is more efficient than SMPSs, a particular implementation of a task dataflow language.
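The ticket idea can be pictured with a minimal sketch: each shared object issues tickets at task-creation time and serves them in order at task-completion time, so a dependent task becomes ready exactly when its predecessors on the same objects have finished. The code below is a greatly simplified, writers-only version of that scheme (no reader batching, no renaming) using hypothetical names; it is not the paper's implementation.

```cpp
#include <atomic>
#include <cstdint>
#include <utility>
#include <vector>

// Each object carries two counters: tickets issued at task creation and the
// ticket currently being served, advanced at task completion.
struct VersionedObject {
    std::atomic<uint64_t> next_ticket{0};   // issued when a task is created
    std::atomic<uint64_t> now_serving{0};   // advanced when a task completes
};

struct TaskDeps {
    std::vector<std::pair<VersionedObject*, uint64_t>> tickets;

    // Called by the spawning thread for every object the task will access.
    void acquire(VersionedObject& obj) {
        tickets.emplace_back(&obj, obj.next_ticket.fetch_add(1));
    }

    // A task is ready when all of its tickets are being served.
    bool ready() const {
        for (auto& [obj, t] : tickets)
            if (obj->now_serving.load(std::memory_order_acquire) != t)
                return false;
        return true;
    }

    // Called when the task finishes: hands each object to the next ticket
    // holder, effectively waking up the task's dependents.
    void release() {
        for (auto& [obj, t] : tickets)
            obj->now_serving.store(t + 1, std::memory_order_release);
    }
};
```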

Relevance:

100.00%

Publisher:

Abstract:

Good performance characterizes project success and value for money. However, performance problems are not uncommon in project management. Incentivization is generally recognized as a strategy for addressing performance problems. This chapter aims to explore incentive mechanisms and their impact on project performance. It is based mainly on the use of incentives in construction and engineering projects, although the same principles apply to project management in other industry sectors. Incentivization can be used in performance areas such as time, cost, quality, safety and environment. A client has different ways of incentivizing a contractor's performance, e.g. (1) a single incentive or multiple incentives; and (2) incentives, disincentives or a combination of both. The establishment of incentive mechanisms proves to have significant potential for relationship development, process enhancement and performance improvement. In order to ensure the success of incentive mechanisms, both contractors and clients need to make extra efforts. As a result, a link is developed among incentive mechanisms, the project management system and project performance.

Relevance:

100.00%

Publisher:

Abstract:

Sediment particle size analysis (PSA) is routinely used to support benthic macrofaunal community distribution data in habitat mapping and Ecological Status (ES) assessment. No optimal PSA Method to explain variability in multivariate macrofaunal distribution has been identified, nor have the effects of changing sampling strategy been examined. Here, we use benthic macrofaunal and PSA grabs from two embayments in the south of Ireland. Four frequently used PSA Methods and two common sampling strategies are applied. A combination of laser particle sizing and wet/dry sieving without peroxide pre-treatment to remove organics was identified as the optimal Method for explaining macrofaunal distributions. ES classifications and EUNIS sediment classification were robust to changes in PSA Method. Returning fauna and PSA samples from the same grab significantly decreased the macrofaunal variance explained by PSA and caused ES to be classified as lower. Employing the optimal PSA Method and sampling strategy will improve benthic monitoring. © 2012 Elsevier Ltd.

Relevance:

100.00%

Publisher:

Abstract:

The rapid development of diverse hardware platforms has been twofold: on one side, the drive toward exascale performance for big data processing and management; on the other, mobile and embedded devices for data collection and human-machine interaction. This has driven a highly hierarchical evolution of programming models. GVirtuS is a general virtualization system developed in 2009 and first introduced in 2010, providing a completely transparent layer between GPUs and virtual machines. This paper shows the latest achievements and developments of GVirtuS, which now supports CUDA 6.5, memory management and scheduling. Thanks to its new and improved remoting capabilities, GVirtuS now enables GPU sharing among physical and virtual machines based on x86 and ARM CPUs, on local workstations, computing clusters and distributed cloud appliances.
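The transparent layer GVirtuS provides is built on a frontend/backend (split-driver) remoting pattern: a guest-side stub intercepts a GPU API call, serializes it, and ships it to a host-side backend with direct GPU access. The sketch below illustrates only that general pattern; the structures, routine names and transport are assumptions for exposition and do not reflect GVirtuS's actual wire protocol or API.

```cpp
#include <cstdint>
#include <cstdio>
#include <cstring>
#include <string>
#include <vector>

// A remoted call: routine name, packed input arguments, packed outputs.
struct CallBuffer {
    std::string routine;
    std::vector<uint8_t> args;
    std::vector<uint8_t> result;

    template <typename T> void push(const T& v) {
        const auto* p = reinterpret_cast<const uint8_t*>(&v);
        args.insert(args.end(), p, p + sizeof(T));
    }
    template <typename T> T pop_result(size_t offset) const {
        T v{};
        std::memcpy(&v, result.data() + offset, sizeof(T));
        return v;
    }
};

// Transport abstraction: could be a TCP socket, VM shared memory, etc.
struct Transport {
    virtual CallBuffer roundtrip(const CallBuffer& call) = 0;
    virtual ~Transport() = default;
};

// Frontend stub running inside the VM: looks like a local allocation call,
// but the work is done by the backend on the physical GPU.
uint64_t remote_gpu_malloc(Transport& link, size_t bytes) {
    CallBuffer call;
    call.routine = "gpuMalloc";                 // hypothetical routine name
    call.push(bytes);
    CallBuffer reply = link.roundtrip(call);
    return reply.pop_result<uint64_t>(0);       // device pointer as opaque handle
}

// Toy backend used only to make this example self-contained: it "executes"
// the call locally and packs a fake device handle into the reply.
struct LoopbackTransport : Transport {
    CallBuffer roundtrip(const CallBuffer& call) override {
        CallBuffer reply = call;
        uint64_t fake_handle = 0xDEADBEEF;
        const auto* p = reinterpret_cast<const uint8_t*>(&fake_handle);
        reply.result.assign(p, p + sizeof(fake_handle));
        return reply;
    }
};

int main() {
    LoopbackTransport link;
    uint64_t dev_ptr = remote_gpu_malloc(link, 1 << 20);   // "allocate" 1 MiB
    std::printf("device handle: 0x%llx\n",
                static_cast<unsigned long long>(dev_ptr));
}
```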

Relevance:

100.00%

Publisher:

Abstract:

A generic architecture for implementing a QR array processor in silicon is presented. This improves on previous research by considerably simplifying the derivation of timing schedules for a QR system implemented as a folded linear array, where account has to be taken of processor cell latency and timing at the detailed circuit level. The architecture and scheduling derived have been used to create a generator for the rapid design of System-on-a-Chip (SoC) cores for QR decomposition. This is demonstrated through the design of a single-chip architecture for implementing an adaptive beamformer for radar applications. Published in IEEE Transactions on Circuits and Systems Part II: Analog and Digital Signal Processing, April 2003 (not Express Briefs; Parts I and II of the journal have since been reorganised into Regular Papers and Express Briefs).

Relevance:

100.00%

Publisher:

Abstract:

The potential that laser-based particle accelerators offer to solve the sizing and cost issues arising with conventional proton therapy has generated great interest in the understanding and development of laser ion acceleration, and in investigating the radiobiological effects induced by laser-accelerated ions. Laser-driven ions are produced in bursts of ultra-short duration, resulting in ultra-high dose rates, and an investigation was carried out at Queen's University Belfast into this virtually unexplored regime of cell radiobiology. This employed the TARANIS terawatt laser to produce protons in the MeV range for irradiation, with dose rates exceeding 10^9 Gy/s in a single exposure. A clonogenic assay was implemented to analyse the biological effect of proton irradiation on V79 cells, which, when compared to data obtained with the same cell line irradiated with conventionally accelerated protons, showed no significant difference. A relative biological effectiveness of 1.4 ± 0.2 at 10% survival fraction was estimated from a comparison with a 225 kVp X-ray source. © 2013 SPIE.

Relevance:

100.00%

Publisher:

Abstract:

Multi-core and many-core platforms are becoming increasingly heterogeneous and asymmetric. This significantly increases the porting and tuning effort required for parallel codes, which in turn often leads to a growing gap between peak machine power and actual application performance. In this work a first step toward the automated optimization of high-level skeleton-based parallel code is discussed. The paper presents an abstract annotation model for skeleton programs aimed at formally describing suitable mappings of parallel activities onto a high-level platform representation. The derived mapping and scheduling strategies are used to generate optimized run-time code. © 2013 Springer-Verlag Berlin Heidelberg.
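The idea of annotating a skeleton with mapping information and compiling that annotation against a platform description can be sketched as follows. The annotation fields, platform model and chunk-size rule are illustrative assumptions, not the annotation model defined in the paper.

```cpp
#include <algorithm>
#include <cstdio>
#include <functional>
#include <string>
#include <vector>

// A platform node (e.g. a big/LITTLE CPU cluster or a GPU) and a mapping hint
// attached to a skeleton instance.
struct PlatformNode { std::string kind; int cores; };
struct Annotation   { std::string place_on; int min_chunk; };

template <typename T>
struct MapSkeleton {
    std::function<T(const T&)> worker;   // the elemental function of the map
    Annotation ann;                      // mapping hint supplied by the user
};

// "Compilation" step: derive a concrete schedule (node + chunk size) from the
// annotation and the platform description, to be baked into run-time code.
struct Schedule { int node_index; int chunk; };

Schedule derive_schedule(const Annotation& a,
                         const std::vector<PlatformNode>& platform,
                         int n_items) {
    for (size_t i = 0; i < platform.size(); ++i)
        if (platform[i].kind == a.place_on) {
            int chunk = std::max(a.min_chunk, n_items / platform[i].cores);
            return {static_cast<int>(i), chunk};
        }
    return {0, n_items};                 // fallback: run sequentially on node 0
}

int main() {
    std::vector<PlatformNode> platform = {{"LITTLE", 4}, {"big", 4}, {"gpu", 1}};
    MapSkeleton<double> square{[](const double& x) { return x * x; },
                               {"big", /*min_chunk=*/64}};
    Schedule s = derive_schedule(square.ann, platform, /*n_items=*/1024);
    std::printf("map -> node %d (%s), chunk %d\n",
                s.node_index, platform[s.node_index].kind.c_str(), s.chunk);
}
```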